Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
We consider the problem of online learning and its application to solving minimax games. For the online learning problem, Follow the Perturbed Leader (FTPL) is a widely studied algorithm which enjoys the optimal $O(T^{1/2})$ \emph{worst case} regret guarantee for both convex and nonconvex losses. In this work, we show that when the sequence of loss functions is \emph{predictable}, a simple modification of FTPL which incorporates optimism can achieve better regret guarantees, while retaining the optimal worst-case regret guarantee for unpredictable sequences. A key challenge in obtaining these tighter regret bounds is the stochasticity and optimism in the algorithm, which requires different analysis techniques than those commonly used in the analysis of FTPL. The key ingredient we utilize in our analysis is the dual view of perturbation as regularization.
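To make the optimistic modification of FTPL concrete, here is a minimal sketch of the idea for online linear optimization over the hypercube $[0,1]^d$: before round $t$ the learner adds a guess $g_t$ of the next gradient to the perturbed cumulative gradient and plays the resulting leader. All names and the choice of decision set, perturbation distribution, and scale `eta` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def optimistic_ftpl(grads, predictions, eta=1.0, rng=None):
    """Sketch of optimistic FTPL over the hypercube [0,1]^d (illustrative).

    At round t the learner plays
        x_t = argmin_{x in [0,1]^d} <cum_grad + g_t + sigma_t, x>,
    where g_t is a guess of the next gradient and sigma_t is a fresh
    random perturbation drawn each round.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = len(grads[0])
    cum = np.zeros(d)          # cumulative observed gradients
    plays, losses = [], []
    for grad, guess in zip(grads, predictions):
        # Symmetric exponential perturbation of scale eta (one common choice).
        sigma = rng.exponential(scale=eta, size=d) * rng.choice([-1.0, 1.0], size=d)
        score = cum + guess + sigma
        # Linear objective on [0,1]^d: minimized coordinate-wise at 0 or 1.
        x = (score < 0).astype(float)
        plays.append(x)
        losses.append(float(grad @ x))
        cum += grad
    return plays, losses
```

When the predictions $g_t$ match the true gradients, the perturbed leader is already aligned with the next loss, which is the mechanism behind the tighter regret bounds; with $g_t = 0$ the update reduces to standard FTPL.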
Review for NeurIPS paper: Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
Summary and Contributions: - It is known in the literature that optimistic variants of the FTRL algorithm can yield better bounds when the sequence of loss functions is predictable. Such results are relatively rare for FTPL. This paper proposes an optimistic variant of the FTPL algorithm, which achieves the known optimal bounds in the worst case, but has the potential to achieve better regret for predictable sequences of loss functions. Specifically, the bounds depend on $\|g_t - \nabla_t\|_*$, where $g_t$ is the estimate of the gradient of the next loss function and $\nabla_t$ is the observed gradient. They instantiate this generic result for the worst-case analysis by setting the future estimate $g_t = 0$, recovering the optimal $O(T^{1/2})$ regret.
The paper provides a Follow the Perturbed Leader algorithm and analysis that obtain better regret bounds when the loss/gradient sequence is predictable. The proofs rely on the equivalent regularization view of FTPL. The authors also apply this result to obtain a parallelizable algorithm for solving smooth convex-concave saddle-point games. Most of the reviewers found the result interesting. Please address the concerns of the reviewer. Personally, I find the predictable-sequences result interesting.